
    A Factor Graph Approach to Multi-Camera Extrinsic Calibration on Legged Robots

    Legged robots are becoming popular not only in research, but also in industry, where they can demonstrate their superiority over wheeled machines in a variety of applications. Whether acting as mobile manipulators or simply as all-terrain ground vehicles, these machines need to precisely track desired base and end-effector trajectories, perform Simultaneous Localization and Mapping (SLAM), and move through challenging environments, all while keeping balance. A crucial requirement for these tasks is that all onboard sensors be properly calibrated and synchronized, so that they provide consistent signals to the software modules they feed. In this paper, we focus on the problem of calibrating the relative pose between a set of cameras and the base link of a quadruped robot. This pose is fundamental to successfully perform sensor fusion, state estimation, mapping, and any other task requiring visual feedback. To solve this problem, we propose an approach based on factor graphs that jointly optimizes the mutual position of the cameras and the robot base using kinematics and fiducial markers. We also quantitatively compare its performance with other state-of-the-art methods on the hydraulic quadruped robot HyQ. The proposed approach is simple, modular, and independent of external devices other than the fiducial marker.
    Comment: To appear in the Third IEEE International Conference on Robotic Computing (IEEE IRC 2019).
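    As a rough illustration of the factor-graph formulation (not the authors' implementation), the sketch below uses GTSAM's Python bindings to recover a single base-to-camera extrinsic from simulated fiducial detections taken at several known base poses; all poses, noise values, and variable names are assumptions:

    ```python
    # Sketch: estimate one base->camera extrinsic E that best explains marker
    # detections taken from several base poses. All values are illustrative;
    # detections are simulated noise-free for brevity.
    import numpy as np
    import gtsam

    E = gtsam.symbol('e', 0)  # unknown base->camera extrinsic
    graph = gtsam.NonlinearFactorGraph()
    noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05] * 6))

    world_T_marker = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(2.0, 0.0, 0.5))
    true_E = gtsam.Pose3(gtsam.Rot3.RzRyRx(0.0, 0.1, 0.0), gtsam.Point3(0.3, 0.0, 0.2))

    for k in range(5):
        # Base pose from leg kinematics at time k (assumed known here).
        world_T_base = gtsam.Pose3(gtsam.Rot3.RzRyRx(0.0, 0.0, 0.1 * k),
                                   gtsam.Point3(0.2 * k, 0.0, 0.5))
        # Fiducial detection: camera->marker pose, simulated from the "true" E.
        cam_T_marker = world_T_base.compose(true_E).inverse().compose(world_T_marker)
        # world_T_marker = world_T_base * E * cam_T_marker  =>  measurement on E:
        measured_E = world_T_base.inverse().compose(world_T_marker).compose(
            cam_T_marker.inverse())
        graph.add(gtsam.PriorFactorPose3(E, measured_E, noise))

    initial = gtsam.Values()
    initial.insert(E, gtsam.Pose3())
    result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
    print(result.atPose3(E))  # recovered base->camera extrinsic
    ```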

    Robust Legged Robot State Estimation Using Factor Graph Optimization

    Legged robots, specifically quadrupeds, are becoming increasingly attractive for industrial applications such as inspection. However, leaving the laboratory and becoming useful to an end user requires reliability in harsh conditions. From the perspective of state estimation, it is essential to accurately estimate the robot's state despite challenges such as uneven or slippery terrain, textureless and reflective scenes, and dynamic camera occlusions. We are motivated to reduce the dependency on foot contact classifications, which fail when slipping, and to reduce position drift during dynamic motions such as trotting. To this end, we present a factor graph optimization method for state estimation which tightly fuses and smooths inertial navigation, leg odometry and visual odometry. The effectiveness of the approach is demonstrated using the ANYmal quadruped robot navigating in a realistic outdoor industrial environment. This experiment included trotting, walking, crossing obstacles and ascending a staircase. The proposed approach decreased the relative position error by up to 55% and the absolute position error by 76% compared to kinematic-inertial odometry.
    Comment: 8 pages, 12 figures. Accepted to RA-L + IROS 2019, July 2019.
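    The fusion idea can be sketched with GTSAM's Python bindings: leg odometry and visual odometry each contribute a relative-pose factor between the same pair of state keys, and the optimizer weighs them by their noise models. IMU preintegration factors are omitted for brevity, and all measurements below are placeholders rather than the paper's data:

    ```python
    # Sketch: fuse two relative-pose odometry streams over a common pose chain.
    import numpy as np
    import gtsam

    graph = gtsam.NonlinearFactorGraph()
    leg_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.02] * 3 + [0.05] * 3))
    vo_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.01] * 3 + [0.02] * 3))
    prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([1e-3] * 6))

    x = lambda k: gtsam.symbol('x', k)  # pose key at time k
    graph.add(gtsam.PriorFactorPose3(x(0), gtsam.Pose3(), prior_noise))

    initial = gtsam.Values()
    initial.insert(x(0), gtsam.Pose3())
    step = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.1, 0.0, 0.0))  # ~0.1 m forward

    for k in range(1, 4):
        # Both odometry sources constrain the same pair of consecutive states;
        # the smoother balances them according to their noise models.
        graph.add(gtsam.BetweenFactorPose3(x(k - 1), x(k), step, leg_noise))
        graph.add(gtsam.BetweenFactorPose3(x(k - 1), x(k), step, vo_noise))
        initial.insert(x(k), gtsam.Pose3())

    result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
    print(result.atPose3(x(3)).translation())  # smoothed final position
    ```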

    Preintegrated Velocity Bias Estimation to Overcome Contact Nonlinearities in Legged Robot Odometry

    In this paper, we present a novel factor graph formulation to estimate the pose and velocity of a quadruped robot on slippery and deformable terrain. The factor graph introduces a preintegrated velocity factor that incorporates velocity inputs from leg odometry and also estimates the related biases. Our experiments showed that it is difficult to model uncertainties at the contact point, such as slip or deforming terrain, as well as leg flexibility. To accommodate these effects and to minimize leg odometry drift, we extend the robot's state vector with a bias term for this preintegrated velocity factor. The bias term can be accurately estimated thanks to the tight fusion of the preintegrated velocity factor with stereo vision and IMU factors, without which it would be unobservable. The system has been validated in several scenarios involving dynamic motions of the ANYmal robot on loose rocks, slopes and muddy ground. We demonstrate a 26% improvement in relative pose error compared to our previous work and 52% compared to a state-of-the-art proprioceptive state estimator.
    Comment: Accepted to ICRA 2020. Video: youtu.be/w1Sx6dIqgQ
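    The role of the bias term can be illustrated with a minimal numeric sketch (our own illustration, not the paper's code): body-frame velocity measurements are rotated into the world frame and summed over an interval, and a slowly varying bias subtracted inside the sum absorbs systematic effects such as slip.

    ```python
    # Sketch of the preintegrated-velocity idea with an explicit bias term.
    # Rates, rotations, and velocities below are illustrative assumptions.
    import numpy as np

    def preintegrate_velocity(rotations, velocities, bias, dt):
        """Sum R_k (v_k - b) dt over an interval: the factor's predicted
        displacement. The estimator solves for `bias` jointly with pose."""
        delta_p = np.zeros(3)
        for R_k, v_k in zip(rotations, velocities):
            delta_p += R_k @ (v_k - bias) * dt
        return delta_p

    dt = 0.0025                            # 400 Hz leg odometry (assumed rate)
    R = [np.eye(3)] * 8                    # body orientation per sample (flat here)
    v = [np.array([0.30, 0.0, 0.0])] * 8   # measured velocity, inflated by slip
    b = np.array([0.05, 0.0, 0.0])         # estimated velocity bias

    print(preintegrate_velocity(R, v, b, dt))  # slip-corrected displacement
    ```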

    Haptic Sequential Monte Carlo Localization for Quadrupedal Locomotion in Vision-Denied Scenarios

    Continuous robot operation in extreme scenarios such as underground mines or sewers is difficult because exteroceptive sensors may fail due to fog, darkness, dirt or malfunction. To enable autonomous navigation in these situations, we have developed a proprioceptive localization method which exploits the foot contacts made by a quadruped robot to localize against a prior map of the environment, without the help of any camera or LIDAR sensor. The proposed method enables the robot to accurately re-localize itself after making a sequence of contact events over a terrain feature. The method is based on Sequential Monte Carlo and can support both 2.5D and 3D prior map representations. We have tested the approach online and onboard the ANYmal quadruped robot in two different scenarios: the traversal of a custom-built wooden terrain course and a wall probing and following task. In both scenarios, the robot is able to effectively achieve a localization match and to execute a desired pre-planned path. The method keeps the localization error down to 10 cm on feature-rich terrain using only its feet and kinematic and inertial sensing.
    Comment: 7 pages, 8 figures, 1 table. Accepted at IEEE/RSJ IROS 2020.
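    A minimal sketch of the measurement update in such a haptic particle filter, with a toy 2.5D map and made-up noise values (none of this is the paper's code):

    ```python
    # Sketch: each particle is a candidate robot pose; when a foot touches down,
    # particles whose 2.5D map predicts a different contact height are
    # down-weighted, then resampled when the effective sample size collapses.
    import numpy as np

    rng = np.random.default_rng(0)
    heightmap = lambda xy: 0.1 * np.floor(xy[..., 0])  # toy stepped 2.5D terrain

    particles = rng.uniform([0, 0], [5, 5], size=(500, 2))  # candidate (x, y)
    weights = np.full(500, 1.0 / 500)

    def contact_update(particles, weights, foot_offset, measured_height, sigma=0.02):
        """Re-weight particles by how well the map height under the foot
        matches the contact height measured through kinematics."""
        predicted = heightmap(particles + foot_offset)
        weights = weights * np.exp(-0.5 * ((predicted - measured_height) / sigma) ** 2)
        weights /= weights.sum()
        if 1.0 / np.sum(weights ** 2) < len(weights) / 2:  # low ESS -> resample
            idx = rng.choice(len(weights), size=len(weights), p=weights)
            particles = particles[idx]
            weights = np.full(len(weights), 1.0 / len(weights))
        return particles, weights

    particles, weights = contact_update(particles, weights,
                                        foot_offset=np.array([0.3, 0.2]),
                                        measured_height=0.2)
    print((weights[:, None] * particles).sum(axis=0))  # weighted pose estimate
    ```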

    Fast and Continuous Foothold Adaptation for Dynamic Locomotion through CNNs

    Legged robots can outperform wheeled machines in most navigation tasks across unknown and rough terrains. For such tasks, visual feedback is a fundamental asset for providing robots with terrain awareness. However, robust dynamic locomotion on difficult terrains with real-time performance guarantees remains a challenge. We present here a real-time, dynamic foothold adaptation strategy based on visual feedback. Our method adjusts the landing position of the feet in a fully reactive manner, using only on-board computers and sensors. The correction is computed and executed continuously along the swing phase trajectory of each leg. To efficiently adapt the landing position, we implement a self-supervised foothold classifier based on a Convolutional Neural Network (CNN). Our method computes foothold corrections up to 200 times faster than the full heuristic evaluation. Our goal is to react to visual stimuli from the environment, bridging the gap between blind reactive locomotion and purely vision-based planning strategies. We assess the performance of our method on the dynamic quadruped robot HyQ, executing static and dynamic gaits (at speeds up to 0.5 m/s) in both simulated and real scenarios; the benefit of safe foothold adaptation is clearly demonstrated by the overall robot behavior.
    Comment: 9 pages, 11 figures. Accepted to RA-L + ICRA 2019, January 2019.
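    A minimal sketch of what such a CNN-based foothold classifier might look like (patch size, candidate grid, and architecture are our assumptions, not the paper's network): a small network scores a discrete grid of corrections around the nominal landing point from a local heightmap patch. In the self-supervised setting, training labels would come from evaluating the slower heuristic offline.

    ```python
    # Sketch: local heightmap patch -> score per candidate foothold correction.
    import torch
    import torch.nn as nn

    N_OFFSETS = 9  # e.g. a 3x3 grid of corrections around the nominal foothold

    classifier = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 8 * 8, N_OFFSETS),  # 32x32 patch -> 8x8 after two poolings
    )

    patch = torch.randn(1, 1, 32, 32)       # heightmap patch around the foothold
    best = classifier(patch).argmax(dim=1)  # index of the chosen correction
    print(best)
    ```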

    Deep IMU Bias Inference for Robust Visual-Inertial Odometry with Factor Graphs

    Visual Inertial Odometry (VIO) is one of the most established state estimation methods for mobile platforms. However, when visual tracking fails, VIO algorithms quickly diverge due to rapid error accumulation during inertial data integration. This error is typically modeled as a combination of additive Gaussian noise and a slowly changing bias which evolves as a random walk. In this work, we propose to train a neural network to learn the true bias evolution. We implement and compare two common sequential deep learning architectures: LSTMs and Transformers. Our approach follows recent learning-based inertial estimators, but instead of learning a motion model, we target the IMU bias explicitly, which allows us to generalize to locomotion patterns unseen in training. We show that our proposed method improves state estimation in visually challenging situations across a wide range of motions by quadrupedal robots, walking humans, and drones. Our experiments show an average 15% reduction in drift rate, with much larger reductions when there is total vision failure. Importantly, we also demonstrate that models trained with one locomotion pattern (human walking) can be applied to another (quadruped robot trotting) without retraining.
    Comment: Accepted to Robotics and Automation Letters.
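    A sketch of the learned bias model under assumed sizes and rates (the paper's exact architecture is not reproduced here): a sequence network consumes a window of raw IMU samples and regresses the current gyroscope and accelerometer biases, which can then be fed to the factor-graph estimator as a measurement:

    ```python
    # Sketch: window of raw IMU samples -> current 6-DoF bias estimate.
    import torch
    import torch.nn as nn

    class IMUBiasLSTM(nn.Module):
        def __init__(self, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(input_size=6, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 6)  # 3 gyro + 3 accel bias components

        def forward(self, imu_window):
            out, _ = self.lstm(imu_window)  # (batch, T, hidden)
            return self.head(out[:, -1])    # bias estimate at the window's end

    model = IMUBiasLSTM()
    window = torch.randn(1, 200, 6)  # ~1 s of IMU at 200 Hz (assumed rate)
    print(model(window))             # predicted [gyro_bias, accel_bias]
    ```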

    Hand Segmentation for Gesture Recognition in EGO-Vision

    Portable devices for first-person camera views will play a central role in future interactive systems. One necessary step for feasible human-computer guided activities is gesture recognition, preceded by reliable hand segmentation from egocentric vision. In this work we present a novel hand segmentation algorithm based on Random Forest superpixel classification that integrates illumination, temporal and spatial consistency. We also propose a gesture recognition method based on Exemplar SVMs, which requires only a small set of positive samples and is therefore well suited to egocentric video applications. Furthermore, this method is enhanced by using segmented images instead of full frames during the test phase. Experimental results show that our hand segmentation algorithm outperforms state-of-the-art approaches and improves gesture recognition accuracy on both the publicly available EDSH dataset and our own dataset designed for cultural heritage applications.
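    Both components can be sketched with scikit-learn under assumed feature shapes (this mirrors the general technique, not the paper's exact pipeline):

    ```python
    # Sketch: (1) a Random Forest labels superpixels as hand/background from
    # appearance features; (2) an Exemplar SVM is trained per positive gesture
    # sample against many negatives. Feature sizes and data are made up.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)

    # --- Hand segmentation: per-superpixel features -> hand (1) / background (0).
    sp_features = rng.normal(size=(1000, 24))   # e.g. color histograms per superpixel
    sp_labels = rng.integers(0, 2, size=1000)
    segmenter = RandomForestClassifier(n_estimators=100, random_state=0)
    segmenter.fit(sp_features, sp_labels)

    # --- Gesture recognition: one Exemplar SVM = one positive vs. all negatives.
    positive = rng.normal(size=(1, 128))        # descriptor of a single exemplar
    negatives = rng.normal(size=(200, 128))
    X = np.vstack([positive, negatives])
    y = np.array([1] + [0] * 200)
    exemplar_svm = LinearSVC(C=1.0, class_weight={1: 200.0, 0: 1.0})  # up-weight the lone positive
    exemplar_svm.fit(X, y)

    query = rng.normal(size=(1, 128))
    print(segmenter.predict(sp_features[:1]), exemplar_svm.decision_function(query))
    ```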

    Survey of a Villa Rustica near Wederath, Flur Hinterm Klop (municipality of Morbach, district of Bernkastel-Wittlich, Rhineland-Palatinate)

    Surveys carried out by the University of Leipzig in the surroundings of the Roman vicus Belginum revealed a probable villa rustica near Wederath, Flur Hinterm Klop (municipality of Morbach, district of Bernkastel-Wittlich, Rhineland-Palatinate, Germany). Stone concentrations of varying size indicate a main building and several outbuildings. The finds consist mainly of bricks (roof, floor and hypocaust tiles) and relatively little pottery. The datable pottery belongs to the 2nd/3rd century AD.

    Survey in Temple Precinct 3 of the Roman vicus Belginum (Wederath, municipality of Morbach, district of Bernkastel-Wittlich, Rhineland-Palatinate)

    As part of the University of Leipzig's multi-year survey programme in the area of the Roman vicus Belginum (Wederath, municipality of Morbach, district of Bernkastel-Wittlich, Rhineland-Palatinate, Germany), temple precinct 3 and its surroundings were field-walked in November 2007. This precinct lies at the western edge of the vicus and had previously been known only from geophysical survey. The field survey showed an excellent match with the magnetometer plot, with find concentrations, mostly brick fragments, along the temenos wall and at the site of the Gallo-Roman ambulatory temple. The finds allow a dating from the 1st to the 2nd and perhaps the early 3rd century AD. Remarkable is the discovery of hand-made Middle and Late La Tène pottery (3rd-1st century BC), which points either to earlier settlement or to early cult activity.